Markov-switching model selection using Kullback–Leibler divergence

Authors

  • Aaron Smith
  • Prasad A. Naik
  • Chih-Ling Tsai
Abstract

In Markov-switching regression models, we use Kullback–Leibler (KL) divergence between the true and candidate models to select the number of states and variables simultaneously. Specifically, we derive a new information criterion, the Markov switching criterion (MSC), which is an estimate of KL divergence. MSC imposes an appropriate penalty to mitigate the overretention of states in the Markov chain, and it performs well in Monte Carlo studies with single and multiple states, small and large samples, and low and high noise. We illustrate the usefulness of MSC via applications to the U.S. business cycle and to media advertising. © 2005 Elsevier B.V. All rights reserved. JEL classification: C22; C52.
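The criterion described above estimates the KL divergence between the true and candidate models. As a minimal illustration of the underlying quantity (not the MSC itself, whose penalty term is derived in the paper), the following sketch computes the KL divergence D(p‖q) between two discrete distributions, such as the stationary distributions implied by two candidate Markov chains:

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions.

    Terms with p[i] == 0 contribute zero by convention (0 * log 0 = 0).
    Assumes q[i] > 0 wherever p[i] > 0, so the divergence is finite.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Hypothetical example: two 3-state distributions
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl_divergence(p, q))  # strictly positive since p != q
```

The divergence is zero if and only if the two distributions coincide, which is why minimizing an estimate of it over candidate models is a natural selection rule.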

Similar resources

Model Confidence Set Based on Kullback-Leibler Divergence Distance

Consider the problem of estimating the true density h(.) based upon a random sample X1,…, Xn. In general, h(.) is approximated using an appropriate (in some sense; see below) model fθ(x). This article, using Vuong's (1989) test along with a collection of k (> 2) non-nested models, constructs a set of appropriate models, called a model confidence set, for the unknown model h(.). Application of such confide...


On the estimation and influence diagnostics for the zero–inflated Conway–Maxwell–Poisson regression model: A full Bayesian analysis

In this paper we develop a Bayesian analysis for zero-inflated regression models based on the COM-Poisson distribution. Our approach is based on Markov chain Monte Carlo methods. We discuss model selection, as well as develop case deletion influence diagnostics for the joint posterior distribution based on the ψ-divergence, which has several divergence measures as particular cases, such as...


KL-learning: Online solution of Kullback-Leibler control problems

We introduce a stochastic approximation method for the solution of an ergodic Kullback-Leibler control problem. A Kullback-Leibler control problem is a Markov decision process on a finite state space in which the control cost is proportional to a Kullback-Leibler divergence of the controlled transition probabilities with respect to the uncontrolled transition probabilities. The algorithm discus...


Min-Max Kullback-Leibler Model Selection

This paper considers an information-theoretic min-max approach to the model selection problem. The aim of this approach is to select the member of a given parameterized family of probability models so as to minimize the worst-case Kullback–Leibler divergence from an uncertain "truth" model. Uncertainty of the truth is specified by an upper bound on the KL divergence relative to a given reference...


Estimation of Kullback–Leibler divergence by local likelihood

Motivated by the bandwidth selection problem in local likelihood density estimation and by the problem of assessing a final model chosen by a certain model selection procedure, we consider estimation of the Kullback–Leibler divergence. It is known that the best bandwidth choice for the local likelihood density estimator depends on the distance between the true density and the 'vehicle' para...




Publication date: 2005